perm filename COMMON[F84,JMC] blob
sn#801132 filedate 1985-09-03 generic text, type C, neo UTF8
%common[f84,jmc] What is common sense? for AI Magazine
\title{WHAT IS COMMON SENSE?}
Common sense includes a certain body of knowledge of the world and
a certain
ability to reason with it. Humans have it, and we will have to
understand it well enough to put it in computer programs if we are ever to
have, for example, useful robot servants.
notes:
1. relation between an entity and its elaboration, e.g. between going
to New York and the details of running to the gate, looking at the
TV screen for the gate, etc.
2. relation between towers and the basic facts about moving objects
3. The relation between logic and thought needs to be discussed. Logic
legitimizes the conclusions of thought but requires more precision.
event(nyt1)
trip(nyt1)
occurred(nyt1,before(1984-oct-8))
origin(nyt1) = Stanford
remark: Can we leave nyt1 incompletely characterized, not merely in
the sense that we haven't given all its characteristics, but in the
sense that we haven't decided whether it is to be considered as
starting at Stanford in a general sense or more specifically at
my office at a particular time? If we regard nyt1 as a concept
of a trip, maybe that will make it easier.
destination(nyt1) = JFK
included(nyt2,nyt1)
destination(nyt2) = SFO
or at(SFO,destination(nyt2))
vehicle(nyt1,boeing1)
type(boeing1, boeing747)
The details of a plan are elaborated to the point where the remainder
can be determined at the time of action. The extreme case is where certain detailed
decisions are left to reflexes, i.e. subintelligent and sublogical
mechanisms. If an unexpected obstacle arises, the sublogical mechanism
may be superseded by the monitoring logical process. Another possibility
is that the emergency arises in the formation of the plan, i.e. via
taking a fact into account that prevents a customary circumscription.
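This deferral can be caricatured in a short program. Everything in it (the step names, the obstacle-checking interface) is a hypothetical illustration, not a proposal for a planner:

```python
# Sketch of plan elaboration with deferred detail.  Step names and
# the obstacle-checking interface are hypothetical illustrations.

def execute(plan, world):
    """Run a plan whose steps are either fixed actions (strings) or
    deferred decisions (callables resolved only at execution time)."""
    log = []
    for step in plan:
        if callable(step):                 # detail left to a "reflex"
            step = step(world)             # decided from current state
        if world.get("obstacle") == step:  # monitor supersedes reflex
            log.append("replan:" + step)
            continue
        log.append(step)
    return log

# The trip fixes the flight but defers the choice of route to the airport.
plan = ["pack",
        lambda w: "taxi_to_SFO" if w["raining"] else "drive_to_SFO",
        "board_boeing1"]

print(execute(plan, {"raining": True}))
# ['pack', 'taxi_to_SFO', 'board_boeing1']
```

The second step is fixed only when the plan runs; if the monitor notices an obstacle to the chosen action, the logical process supersedes the reflex and replans that step.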
	Even though making programs with common sense reasoning
ability and providing them with common sense knowledge has been
studied in AI for almost 30 years, the subject hasn't advanced
very far, and what is known is spotty. For this reason, this
article doesn't discuss the various aspects of common sense
knowledge and ability at a length and detail proportional to
their importance; my knowledge is too incomplete for that.
Where there has been formalization, I will outline it. Where
there has not, I have to content myself with English language
examples of the common sense knowledge. Where I can, I will
also give examples of common sense reasoning, but these are
even more spotty.
While much common sense knowledge is readily expressed in English,
it seems that important parts of it are not. At least they aren't
ordinarily so expressed.
fragments - Ambiguity tolerance
The formalism we use for expressing facts put into a common
sense database must possess {\it ambiguity tolerance}. Let's illustrate
this with data concerning trips, e.g. airplane or automobile trips.
We can ask where and when a particular trip from Stanford to Austin is
considered to begin. Is its beginning determined to the hour or
second or just to the day? Does it begin when I walk out the door
of my home or office and is the place of origin of the trip so
accurately determined? What must the common sense database formulas
assume about this? If we use the expression $beginplace(trip)$, do
the axioms involving it make it definite to the nearest foot?
It seems to me that we need axioms that somehow evade the issue.
Sometimes it's definite and sometimes it isn't. For example,
Stanford University regulations concerning the payment of per diem
suppose that the times are defined to a quarter of a day. For some
other purpose, e.g. trip insurance, there may be another definition.
Still other purposes, such as one's availability for another activity,
require other notions.
Therefore, the axioms in the database need {\it ambiguity tolerance}
in the sense that for many of them, it shouldn't matter how
precisely these notions are considered to be defined. For other
purposes it may matter, but different purposes may require different
notions.
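A sketch of how a database might tolerate this ambiguity: store the begin time only as precisely as it was asserted, and let each purpose coarsen it to the notion it needs. The function and trip names here are hypothetical.

```python
# Sketch: begintime(nyt1) is stored only as precisely as it was
# asserted; each purpose coarsens it to the notion it needs.
# per_diem_time and insurance_time are hypothetical names.

from datetime import datetime

begintime = {"nyt1": datetime(1984, 10, 8, 9, 37)}   # asserted to the minute

def per_diem_time(trip):
    """Per diem regulations need only the quarter of the day."""
    t = begintime[trip]
    return (t.date(), t.hour // 6)        # quarter 0..3

def insurance_time(trip):
    """Trip insurance may need the exact asserted time."""
    return begintime[trip]

# Axioms mentioning begintime need not fix its precision; both
# purposes are consistent with the same stored fact.
```

Nothing in the stored fact commits the database to one precision; the coarsening is done by the purpose, not by the axioms.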
The term {\it ambiguity tolerance} was first used by
Hubert Dreyfus in his book {\it What Computers Can't Do}, and
he supposed that computers intrinsically couldn't have it.
Our ideas for achieving ambiguity tolerance are based on
non-monotonic reasoning. We have no general solution, but here
are some examples where non-monotonic reasoning can help.
Prevention
In the trash can problem we want to prevent the dogs from
overturning the trash cans, i.e. we want to make it impossible for
them to do it. It seems to be harder to express our common sense
knowledge about when something is impossible than knowledge about
what actions will achieve a goal. At any rate, I don't know of any
work on it.
The actual reasoning in the trash can problem seems to be
highly non-monotonic. We start with a specific action by the dogs
that overturns a trash can and create a situation in which that
action no longer has the unwanted effect. We then jump to the
conclusion that this action is the only action a dog can do that
will overturn the can. Experience with similar inferences is
mixed. Sometimes we succeed in preventing an animal or person
from achieving a goal and sometimes it or he finds another way.
In any case, making the conjecture that frustrating the action
will prevent achieving the goal is reasonable.
∀s c.hanging(c,s) ⊃ ab aspect73(dog,c,s)
∀s c.hanging(c,s) ⊃ ¬overturned(c,result(does(dog,jumpon(c)),s))
¬tooheavy(x) ∧ etc. ⊃ overturned(x,result(does(dog,jumpon(x)),s))
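The axioms above can be caricatured as a tiny default-reasoning program: jumping on a can overturns it unless the abnormality aspect holds, and hanging the can up makes it hold. This is a hypothetical sketch that simply checks known abnormalities, not a circumscription prover:

```python
# Default-reasoning sketch of the trash can axioms: the dog's jump
# overturns a can unless aspect73 is abnormal, and hanging the can
# makes it abnormal.  A real treatment would circumscribe ab; this
# merely checks the known abnormality before applying the default.

def ab_aspect73(can, s):
    return s.get((can, "hanging"), False)

def result_jumpon(can, s):
    """Situation after does(dog, jumpon(can))."""
    s1 = dict(s)
    too_heavy = s.get((can, "tooheavy"), False)
    if not too_heavy and not ab_aspect73(can, s):
        s1[(can, "overturned")] = True    # the default conclusion
    return s1

s0 = {}                                   # can standing on the ground
assert result_jumpon("can1", s0)[("can1", "overturned")]

s_hung = {("can1", "hanging"): True}      # we hang the can up
assert not result_jumpon("can1", s_hung).get(("can1", "overturned"), False)
```

The non-monotonic jump is in what the program leaves out: only the abnormalities we know about can block the default conclusion, just as frustrating the one known action is conjectured to prevent the goal.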
Ontology -
In philosophy {\it ontology} is the study of the entities
that exist, and it went out of fashion in the twentieth century.
In the 1950s Quine offered a definition that has been useful in
logic, philosophy and now in AI. Namely, the ontology of a theory
or a program is the range of the variables. Both formalized theories
and AI programs often have very limited ontologies. The mathematical
examples include the ``elementary theories'' of algebraic structures.
{\it Elementary} here means that the variables range over elements of
the structures and do not range over other entities, e.g. substructures.
Elementary theories often have interesting metamathematical properties,
from which certain mathematical properties can sometimes be derived.
See (Barwise 1983), especially Barwise's introductory chapter.
***examples of the weak ontology of AI programs: Mycin; individual
robbers but not sets of robbers; colors, but one can't say that color is
a property or even what the colors are.
red(x)
color(x) = red
colors = {red, green, blue}
color ∈ properties(physical objects)
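The contrast can be sketched in a program, with all names invented for illustration: in the weak ontology the color red is frozen inside a predicate, while in the richer one colors and the property color are themselves values that variables can range over.

```python
# Ontology sketch; all names are invented for illustration.  In the
# weak ontology, red is frozen inside a predicate name, so no variable
# can range over colors.  In the richer one, colors and the property
# color are first-class values.

# Weak ontology: variables range over individual objects only.
def red(x):
    return x == "block1"

# Richer ontology: colors are objects, properties are objects.
color = {"block1": "red", "block2": "green"}
colors = {"red", "green", "blue"}
properties = {"physical_object": {"color", "shape"}}

assert color["block1"] in colors                  # red is in the set colors
assert "color" in properties["physical_object"]   # color is a property

# Quantifying over colors is now expressible:
every_color_used = all(any(color[b] == c for b in color)
                       for c in {"red", "green"})
assert every_color_used
```
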
	Human thought and language are ontologically rich. We can
form abstractions like love and justice, or on the more mundane
level appropriate for AI, *** and ***.
Mental qualities
Reasoning about information
People reason a lot about information, where it is,
who has it, how to get it and how to transmit it to others.
Limiting other people's information and even misleading them
plays a lesser role but still has to be understood.
Here are some English language examples of information about
information:
``His number is (isn't) in the telephone book''.
``The newspaper accusation is anonymous''.
Appearance and reality:
Humans and robots receive through their sense organs
only limited information about the objects and phenomena with
which they must interact. A large part of our activity
involves inferring facts about objects and phenomena from
sensory information. Moreover, this inference often involves
physically active processes, e.g. turning the head, poking something,
asking questions and even scientific experiments.
We have a large number of pattern-action rules for going
from appearance to reality, but we rightly don't regard them as
providing the basic information about the relation between the two.
There are too many ways in which such rules can be fooled and often
are.
Common sense is even further from acting according to the positivist
injunction to regard appearance as fundamental and reality as its structure.
	We have evolved in a complicated world, only a small part
of which is accessible to our sense organs. For example,
we have no sense organ that permits direct access to the three
dimensional structure of objects. Vision and touch tell us only
about surfaces and sound gives very accidental information about
the world. In building robots we face similar limitations.
Our information about how objects and phenomena generate
appearances is more reliable than the rules that go from appearance
to reality. This is because appearances are phenomena in the
world concerning the relations between an observer and the
rest of the world. Therefore, the facts about the causes of
appearances involve entities that may not be directly observable.
Appearance and reality:
One of the most common common sense activities is inferring facts
about the world from appearances. By {\it appearance}
we mean more than just vision --- also sound, touch, and facts that themselves
are inferred from appearance and other knowledge. For example, if it is
inferred that someone is displeased, then in inferring the cause of his
displeasure, the displeasure itself serves as an appearance.
The relation between appearance and reality is complicated.
For example, vision only shows us the surfaces of objects. Their
interiors must be inferred by taking them apart or from general rules
ultimately developed from taking other objects apart. The inference
of scientific theories is even more complex in its dependence on
large numbers of observations and previous theories.
	We have many pattern-action rules that infer reality from
appearance. They work quickly but are subject to illusions
of many kinds. Our knowledge that goes from reality to appearance has
fewer exceptions but is not so immediately usable.
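The asymmetry between the two directions can be illustrated with an invented toy example: a reality-to-appearance model with no exception in this little world, and a fast appearance-to-reality rule that a refraction "illusion" fools.

```python
# Toy asymmetry between the two directions of inference; all the
# predicates are invented.  "Looks bent => is bent" is fast but is
# fooled by refraction, while the reality-to-appearance model has
# no exception in this little world.

def appearance(is_bent, in_water):
    """Reality -> appearance: a straight stick looks bent in water."""
    return "looks_bent" if (is_bent or in_water) else "looks_straight"

def quick_rule(look):
    """Appearance -> reality pattern-action rule: fast but fallible."""
    return look == "looks_bent"           # concludes: the stick is bent

# The generative model is right about the straight stick in water...
assert appearance(is_bent=False, in_water=True) == "looks_bent"
# ...but the quick rule then wrongly concludes the stick is bent:
assert quick_rule(appearance(is_bent=False, in_water=True))   # an illusion
```
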